For data \(\{(\boldsymbol{x}_i, y_i); i = 1, \ldots, N\}\) with \(\boldsymbol{x}_i \in \mathbb{R}^d\) and \(y_i \in \mathbb{R}\),
\[y_i = f(\boldsymbol{x}_i) + \varepsilon_i\]
we propose an underlying function,
\[f(\cdot) \sim \mathcal{GP}\left( \mu(\boldsymbol{x};\boldsymbol\theta_\mu),\; k(\boldsymbol{x}, \boldsymbol{x}'; \boldsymbol{\theta}_k) \right)\]
where \(\mu(\cdot)\) is the mean function and \(k(\cdot)\) is the covariance kernel function, with hyperparameters \(\boldsymbol\theta_\mu\) and \(\boldsymbol\theta_k\), respectively.
If we were to take many realisations of a GP, their pointwise mean over the support would converge to the specified mean function.
For example, with \(\mu(x) = 0\) and \(k(x, x') = \exp\{-\|x - x'\|^2\}\), realisations are smooth curves scattered around zero.
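As a sketch of this example, the following draws realisations from a zero-mean GP prior with the squared-exponential kernel above (the grid, jitter value, and number of draws are illustrative choices, not from the notes):

```python
import numpy as np

def sq_exp_kernel(x1, x2):
    # Squared-exponential kernel k(x, x') = exp(-||x - x'||^2) on 1-D inputs.
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return np.exp(-d2)

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 5.0, 100)               # evaluation grid
K = sq_exp_kernel(xs, xs)                     # prior covariance matrix
K += 1e-8 * np.eye(len(xs))                   # jitter for numerical stability
L = np.linalg.cholesky(K)

# Three prior realisations: f = L z with z ~ N(0, I) gives f ~ MVN(0, K).
draws = L @ rng.standard_normal((len(xs), 3))

# The empirical pointwise mean over many realisations approaches mu(x) = 0.
many = L @ rng.standard_normal((len(xs), 5000))
empirical_mean = many.mean(axis=1)
```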
The reconstruction \(f^*\) depends on the choice of \(\mu^*\).
We actually want the reconstruction conditioned on the observations \(y\):
\[f^* | y \sim \mathcal{MVN}(\bar{f^*}, \mathrm{Cov}(f^*))\]
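Assuming Gaussian observation noise \(\varepsilon_i \sim \mathcal{N}(0, \sigma^2)\) (the noise model is not stated above, so this is an assumption), the standard GP predictive equations give the two moments explicitly:
\[\bar{f^*} = \mu^* + K(x^*, x)\left[K(x, x) + \sigma^2 I\right]^{-1}(y - \mu),\]
\[\mathrm{Cov}(f^*) = K(x^*, x^*) - K(x^*, x)\left[K(x, x) + \sigma^2 I\right]^{-1} K(x, x^*),\]
where \(K(x^*, x)\) collects kernel evaluations between the prediction points \(x^*\) and the observed inputs \(x\).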
Apply GP regression under each combination of mean function (zero, or a best fit) and covariance kernel (squared exponential, or Matérn 3/2).

Comments
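A minimal sketch of one such combination, zero mean with the squared-exponential kernel, using the predictive equations above; the toy data, noise level \(\sigma = 0.1\), and prediction grid are illustrative assumptions:

```python
import numpy as np

def sq_exp(a, b):
    # Squared-exponential kernel on 1-D inputs.
    return np.exp(-(a[:, None] - b[None, :]) ** 2)

rng = np.random.default_rng(1)
xs = np.sort(rng.uniform(0.0, 5.0, 20))          # observed inputs
ys = np.sin(xs) + 0.1 * rng.standard_normal(20)  # noisy observations
grid = np.linspace(0.0, 5.0, 200)                # prediction points x*
sigma2 = 0.1 ** 2                                # assumed noise variance

K = sq_exp(xs, xs) + sigma2 * np.eye(len(xs))    # K(x, x) + sigma^2 I
Ks = sq_exp(grid, xs)                            # K(x*, x)
Kss = sq_exp(grid, grid)                         # K(x*, x*)

# Posterior mean and covariance (zero prior mean, so mu and mu* vanish).
f_mean = Ks @ np.linalg.solve(K, ys)
f_cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
f_std = np.sqrt(np.clip(np.diag(f_cov), 0.0, None))
```

Swapping in a Matérn 3/2 kernel or a best-fit mean only changes `sq_exp` and the mean terms; the conditioning step is identical.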